
    Compact Finite Differences and Cubic Splines

    In this paper I uncover and explain (using contour integrals and residues) a connection between cubic splines and a popular compact finite difference formula. The connection is that on a uniform mesh the simplest Padé scheme for generating fourth-order accurate compact finite differences gives \textsl{exactly} the derivatives at the interior nodes needed to guarantee twice-continuous differentiability for cubic splines. I found this connection surprising, because the two problems being solved are different. I also introduce an apparently new spline-like interpolant that I call a compact cubic interpolant; this is similar to one introduced in 1972 by Swartz and Varga, but has higher-order accuracy at the edges. I argue that for mildly nonuniform meshes the compact cubic approach offers some potential advantages, and that even for uniform meshes it offers a simple way to treat the edge conditions, relieving the user of the burden of choosing one of the three standard options: free (natural), complete (clamped), or "not-a-knot" conditions. Finally, I establish that the matrices defining the compact cubic splines (equivalently, the fourth-order compact finite difference formulae) are positive definite, and in fact totally nonnegative, if all mesh widths have the same sign.
    Comment: Revised and corrected version. 25 pages, 4 figures. Keywords: compact finite differences; cubic splines; barycentric form; compact cubic splines; contour integral methods; totally nonnegative matrices.
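
    For orientation, the interior relation in question is the classical fourth-order Padé compact scheme on a uniform mesh of width $h$ (a standard formula, quoted here for context rather than from the paper itself):
    $$ f'_{i-1} + 4 f'_i + f'_{i+1} = \frac{3}{h}\left(f_{i+1} - f_{i-1}\right), $$
    which is the same tridiagonal relation that enforces $C^2$ continuity of a cubic spline interpolant at the interior nodes.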

    Optimal Solution of Linear Ordinary Differential Equations by Conjugate Gradient Method

    Solving initial value problems (IVPs) and boundary value problems (BVPs) for linear ordinary differential equations (ODEs) plays an important role in many applications. There are various numerical methods and solvers that produce approximate solutions represented by points; however, little work can be found in the literature on optimal solutions that minimize the residual. In this paper, we first use Hermite cubic spline interpolation at mesh points to represent the solution; we then define the residual error as the square of the $L^2$ norm of the residual obtained by substituting the interpolated solution back into the ODE. Solving the ODE is thus reduced to an optimization problem over a certain solution space, which can be solved by the conjugate gradient method while taking advantage of the sparsity of the corresponding matrix. The IVP and BVP examples in the paper show that this method can find a solution with smaller global error without additional mesh points.
    Comment: 9 pages, 6 figures
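
    A minimal sketch of the idea (my own Python/SciPy illustration, not the authors' code; the model problem $y' = -y$, $y(0) = 1$, the mesh, the quadrature grid, and the use of a general-purpose CG optimizer are all assumptions made for the example):

        import numpy as np
        from scipy.interpolate import CubicHermiteSpline
        from scipy.optimize import minimize

        x = np.linspace(0.0, 1.0, 6)        # mesh points
        xq = np.linspace(0.0, 1.0, 201)     # quadrature grid for the residual
        dxq = xq[1] - xq[0]

        def residual(v):
            y = np.concatenate(([1.0], v))  # clamp the initial value y(0) = 1
            dy = -y                         # node slopes supplied by the ODE y' = -y
            s = CubicHermiteSpline(x, y, dy)
            r = s(xq, 1) + s(xq)            # pointwise residual s'(x) + s(x)
            return dxq * np.sum(r**2)       # crude quadrature of the squared L2 norm

        v0 = np.ones(len(x) - 1)            # guesses for the unknown node values
        opt = minimize(residual, v0, method="CG")
        print(np.max(np.abs(opt.x - np.exp(-x[1:]))))  # compare with exact e^{-x}

    The sketch replaces the paper's sparse-matrix formulation with generic numerical quadrature and finite-difference gradients purely for brevity.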

    Narayana, Mandelbrot, and A New Kind of Companion Matrix

    We demonstrate a new kind of companion matrix, for polynomials of the form $c(\lambda) = \lambda a(\lambda) b(\lambda) + c_0$ where upper Hessenberg companions are known for the polynomials $a(\lambda)$ and $b(\lambda)$. This construction can generate companion matrices with smaller entries than the Fiedler or Frobenius forms. This generalizes Piers Lawrence's Mandelbrot companion matrix. We motivate the construction by use of Narayana-Mandelbrot polynomials, which are also new to this paper.
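
    As context for the form $c(\lambda) = \lambda a(\lambda) b(\lambda) + c_0$ (a standard example recalled here, not taken from the paper's text): the Mandelbrot polynomials obey the recurrence
    $$ p_{n+1}(\lambda) = \lambda\, p_n(\lambda)^2 + 1, $$
    so each step is an instance of that form with $a = b = p_n$ and $c_0 = 1$, which is what Lawrence's companion construction exploits.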

    Differentiation Matrices for Univariate Polynomials

    We collect here elementary properties of differentiation matrices for univariate polynomials expressed in various bases, including orthogonal polynomial bases and non-degree-graded bases such as Bernstein bases and Lagrange and Hermite interpolational bases.
    Comment: 14 pages, two figures
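
    As a concrete instance (the simplest case, constructed here for illustration; the paper treats many more bases): in the monomial basis, differentiating $\sum_k c_k x^k$ gives coefficients $(k+1)c_{k+1}$, so the differentiation matrix is a single superdiagonal.

        import numpy as np

        n = 5                                    # degree of the polynomials
        D = np.diag(np.arange(1.0, n + 1), k=1)  # monomial-basis differentiation matrix
        c = np.array([3., 0., 2., 0., 0., 1.])   # coefficients of 3 + 2x^2 + x^5
        print(D @ c)                             # [0, 4, 0, 0, 5, 0], i.e. 4x + 5x^4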

    Minimal height companion matrices for Euclid polynomials

    We define Euclid polynomials $E_{k+1}(\lambda) = E_{k}(\lambda)\left(E_{k}(\lambda) - 1\right) + 1$ with $E_{1}(\lambda) = \lambda + 1$, in analogy to the Euclid numbers $e_k = E_{k}(1)$. We show how to construct companion matrices $\mathbb{E}_k$, so that $E_k(\lambda) = \det\left(\lambda\mathbf{I} - \mathbb{E}_{k}\right)$, of height 1 (and thus of minimal height over all integer companion matrices for $E_k(\lambda)$). We prove various properties of these objects, and give experimental confirmation of some unproved properties.
    Comment: 15 pages, 7 figures
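
    A quick sanity check of the definitions (a throwaway SymPy sketch of my own, not from the paper): iterating the recurrence and evaluating at $\lambda = 1$ reproduces the Euclid numbers $2, 3, 7, 43, 1807, \ldots$

        import sympy as sp

        lam = sp.symbols('lambda')
        E = lam + 1                        # E_1
        print(1, E.subs(lam, 1))           # e_1 = 2
        for k in range(2, 6):
            E = sp.expand(E*(E - 1) + 1)   # E_{k+1} = E_k (E_k - 1) + 1
            print(k, E.subs(lam, 1))       # e_k = 3, 7, 43, 1807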

    Pure tone modes for a 5:3 elliptic drum

    The paper exhibits several standing modes of a 5:3 elliptic drum computed using Mathieu functions. To match the boundary conditions, I used Newton's method on the appropriate modified Mathieu equation, with the Squire-Trapp formula for computing derivatives. I tabulate the requisite values of the parameter $q$ for these low-frequency modes.
    Comment: 12 pages; 56 figures; two tables
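
    The Squire-Trapp formula used here is the complex-step derivative, $f'(x) \approx \operatorname{Im} f(x + ih)/h$, which involves no subtractive cancellation and so tolerates extremely small $h$. A minimal demonstration on a toy function of my own choosing (not one of the paper's Mathieu computations):

        import numpy as np

        f = lambda x: np.exp(x) * np.sin(x)   # any function analytic near x will do
        x, h = 1.0, 1e-100                    # h can be tiny: nothing is subtracted
        fp = np.imag(f(x + 1j*h)) / h         # Squire-Trapp complex-step derivative
        exact = np.exp(1.0) * (np.sin(1.0) + np.cos(1.0))
        print(fp, exact)                      # agree to full double precision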

    Inverse Cubic Iteration

    There are thousands of papers on rootfinding for nonlinear scalar equations. Here is one more, to talk about an apparently new method, which I call "Inverse Cubic Iteration" (ICI) in analogy to the inverse quadratic iteration in Richard Brent's zeroin method. The possibly new method is based on a cubic blend of tangent-line approximations for the inverse function. We rewrite this iteration for numerical stability as an average of two Newton steps and a secant step: only one new function evaluation and one new derivative evaluation are needed for each step. The total cost of the method is therefore only trivially more than that of Newton's method, and we will see that it has order $1+\sqrt{3} = 2.732\ldots$, ensuring that to achieve a given accuracy it usually takes fewer steps than Newton's method while using essentially the same effort per step.
    Comment: 12 pages, 4 figures
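
    One plausible reading of the basic idea (a naive sketch of my own, assuming the obvious construction; the paper's stabilized rewrite as a combination of Newton and secant steps will differ in detail): fit a cubic Hermite interpolant to the inverse function through the two most recent iterates, with slopes $1/f'$, and evaluate it at $y = 0$.

        import numpy as np

        def ici_step(f, df, x0, x1):
            # Cubic Hermite interpolation of the inverse function through
            # (f(x0), x0) and (f(x1), x1), slopes 1/f', evaluated at y = 0.
            y0, y1 = f(x0), f(x1)
            m0, m1 = 1.0/df(x0), 1.0/df(x1)
            t = (0.0 - y0) / (y1 - y0)         # target y = 0 in local coordinates
            h00 = 2*t**3 - 3*t**2 + 1          # standard cubic Hermite basis
            h10 = t**3 - 2*t**2 + t
            h01 = -2*t**3 + 3*t**2
            h11 = t**3 - t**2
            return h00*x0 + h10*(y1 - y0)*m0 + h01*x1 + h11*(y1 - y0)*m1

        f = lambda x: x**3 - 2.0               # root at 2**(1/3)
        df = lambda x: 3*x**2
        x0, x1 = 1.0, 1.5
        for _ in range(5):
            x0, x1 = x1, ici_step(f, df, x0, x1)
        print(x1, 2.0**(1/3))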

    Stieltjes, Poisson and other integral representations for functions of Lambert $W$

    We show that many functions containing $W$ are Stieltjes functions. Explicit Stieltjes integrals are given for the functions $1/W(z)$, $W(z)/z$, and others. We also prove a generalization of a conjecture of Jackson, Procacci & Sokal. Integral representations of $W$ and related functions are also given which are associated with their being Pick or Bernstein functions. Representations based on Poisson and Burniston-Siewert integrals are given as well.
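
    For reference (standard definitions recalled here, not statements from the paper): the Lambert $W$ function is defined implicitly by
    $$ W(z)\, e^{W(z)} = z, $$
    and in one common normalization a Stieltjes function is one representable as
    $$ f(z) = a + \int_0^\infty \frac{d\mu(t)}{z + t} $$
    for some $a \ge 0$ and positive measure $\mu$ on $[0, \infty)$.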

    Symbolic-Numeric Integration of Rational Functions

    We consider the problem of symbolic-numeric integration of symbolic functions, focusing on rational functions. Using a hybrid method allows the stable yet efficient computation of symbolic antiderivatives while avoiding the ill-conditioning to which purely numerical methods are susceptible. We propose two alternative methods for exact input that compute the rational part of the integral using Hermite reduction and then compute the transcendental part in two different ways, using a combination of exact integration and efficient numerical computation of roots. The symbolic computation is done within BPAS (Basic Polynomial Algebra Subprograms), a highly optimized environment for polynomial computation on parallel architectures, while the numerical computation is done using the highly optimized multiprecision rootfinding package MPSolve. We show that both methods are forward and backward stable in a structured sense, and that away from singularities tolerance proportionality is achieved by adjusting the precision of the rootfinding tasks.
    Comment: 25 pages, 4 figures; added a footnote and page numbers
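
    As background on the shape of such algorithms (standard facts recalled here, not details from the paper): Hermite reduction writes
    $$ \int \frac{A}{B}\,dx = \frac{C}{D} + \int \frac{E}{F}\,dx $$
    with $F$ squarefree, and the remaining transcendental part evaluates, over the roots $r_i$ of $F$, to
    $$ \int \frac{E}{F}\,dx = \sum_i \frac{E(r_i)}{F'(r_i)}\, \ln(x - r_i). $$
    Computing the $r_i$ numerically (here with MPSolve) is where the hybrid symbolic-numeric step enters.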

    Generalized Standard Triples for Algebraic Linearizations of Matrix Polynomials

    We define \emph{generalized standard triples} $\mathbf{X}$, $z\mathbf{C}_{1} - \mathbf{C}_{0}$, $\mathbf{Y}$ of regular matrix polynomials $\mathbf{P}(z) \in \mathbb{C}^{n \times n}[z]$, in order to use the representation $\mathbf{X}(z\mathbf{C}_{1} - \mathbf{C}_{0})^{-1}\mathbf{Y} = \mathbf{P}^{-1}(z)$, valid except when $z$ is an eigenvalue. This representation can be used to construct so-called \emph{algebraic linearizations}, such as those of $\mathbf{H}(z) = z\mathbf{A}(z)\mathbf{B}(z) + \mathbf{C} \in \mathbb{C}^{n \times n}[z]$, from linearizations of $\mathbf{A}(z)$ and $\mathbf{B}(z)$. This can be done even if $\mathbf{A}(z)$ and $\mathbf{B}(z)$ are expressed in different polynomial bases. Our main theorem is that $\mathbf{X}$ can be expressed using the coefficients $e_k$ of the expansion $1 = \sum_{k=0}^{\ell} e_k \phi_k(z)$ in the relevant polynomial basis. For convenience, we tabulate generalized standard triples for orthogonal polynomial bases, the monomial basis, and Newton interpolational bases; for the Bernstein basis; for Lagrange interpolational bases; and for Hermite interpolational bases. We account for the possibility of common similarity transformations. We give explicit proofs using the Schur complement for the less familiar bases. We also give a first explicit proof that algebraic linearizations are linearizations, by constructing unimodular matrices $\mathbf{E}$ and $\mathbf{F}$ that transform the algebraic linearization to its standard form: $\mathbf{E}\,\mathbf{L}(z)\,\mathbf{F} = \mathrm{diag}(\mathbf{P}(z), \mathbf{I}, \ldots, \mathbf{I})$.
    Comment: 38 pages
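
    A small worked instance of the representation (a standard scalar monomial-basis example worked out here for concreteness, not copied from the paper): for monic $p(z) = z^2 + c_1 z + c_0$ with Frobenius companion
    $$ \mathbf{C} = \begin{pmatrix} 0 & 1 \\ -c_0 & -c_1 \end{pmatrix}, $$
    one checks directly that
    $$ \begin{pmatrix} 1 & 0 \end{pmatrix} (z\mathbf{I} - \mathbf{C})^{-1} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \frac{1}{p(z)}, $$
    consistent with the main theorem: in the monomial basis $1 = \phi_0(z)$, so $e_0 = 1$ and all other $e_k = 0$, giving $\mathbf{X} = (1, 0)$.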